28 research outputs found

    Linguistic dimensions of second language accent and comprehensibility: Nonnative listeners' perspectives

    The current study investigated the effect of listener status (native, nonnative) and language background (French, Mandarin) on global ratings of second language speech. Twenty-six nonnative English listeners representing the two language backgrounds (n = 13 each) rated the comprehensibility and accentedness of 40 French speakers of English. These same speakers had previously been rated by native listeners and coded for 19 linguistic measures of speech (e.g., segmental errors, word stress errors, grammar accuracy) in Trofimovich and Isaacs (2012). Analyses indicated no difference in global ratings between nonnative and native listeners, or between the two nonnative listener groups. Similarly, there were no major differences in the linguistic dimensions associated with each group’s ratings. However, analyses of verbal reports for a subset of nonnative listeners (n = 5 per group) demonstrated that each group attributed its ratings to somewhat different linguistic cues.

    Second Language Comprehensibility Revisited: Investigating the Effects of Learner Background

    The current study investigated first language (L1) effects on listener judgments of comprehensibility and accentedness in second language (L2) speech. The participants were 60 university-level adult speakers of English from four L1 backgrounds (Chinese, Romance, Hindi, Farsi), with 15 speakers per group, performing a picture narrative task. Ten native English listeners used continuous sliding scales to evaluate the speakers’ audio recordings for comprehensibility and accentedness, as well as for 10 linguistic variables drawn from the domains of pronunciation, fluency, lexis, grammar, and discourse. While comprehensibility was associated with several linguistic variables (segmentals, prosody, fluency, lexis, grammar), accentedness was primarily linked to pronunciation (segmentals, word stress, intonation). The relative strength of these associations also varied as a function of the speakers’ L1, especially for comprehensibility, with Chinese speakers influenced chiefly by pronunciation variables (specifically segmental errors), Hindi speakers by lexicogrammar variables, Romance speakers by variables spanning both the pronunciation and lexicogrammar domains, and Farsi speakers showing no strong association with any linguistic variable. Overall, these results suggest that speakers’ L1 plays an important role in listener judgments of L2 comprehensibility and that instructors aiming to promote L2 speakers’ communicative success may need to expand their teaching targets beyond segmentals to include prosody-, fluency-, and lexicogrammar-based targets.

    Flawed self-assessment: investigating self- and other-perception of second language speech

    This study targeted the relationship between self- and other-assessment of accentedness and comprehensibility in second language (L2) speech, extending prior social and cognitive research documenting weak or nonexistent links between people's self-assessments and objective measures of performance. Results of two experiments (N = 134) revealed mostly inaccurate self-assessment: speakers at the low end of the accentedness and comprehensibility scales overestimated their performance, while speakers at the high end of each scale underestimated it. For both accent and comprehensibility, discrepancies between self- and other-assessment were associated with listener-rated measures of phonological accuracy and temporal fluency, but not with listener-rated measures of lexical appropriateness and richness, grammatical accuracy and complexity, or discourse structure. The findings suggest that inaccurate self-assessment is linked to the inherent complexity of L2 perception and production as cognitive skills, and they point to several ways of helping L2 speakers align or calibrate their self-assessments with their actual performance.

    A methodological synthesis and meta-analysis of judgment tasks in second language research

    Judgment tasks (JTs, often called acceptability or grammaticality judgment tasks) have been used extensively throughout the history of second language (L2) research. Data from such instruments have been used to investigate a range of hypotheses and phenomena, from generativist theories to instructional effectiveness. Though popular and convenient, JTs have engendered considerable controversy, with concerns often centered on their construct validity in terms of the type of representations they elicit, such as implicit or explicit knowledge. A number of studies have also examined the impact of JT conditions such as timed vs. untimed and oral vs. written administration. This article presents a synthesis of the use of JTs and a meta-analysis of the effects of task conditions on learner performance. Following a comprehensive search, 385 JTs were found in 302 individual studies. Each report was coded for features related to study design as well as for the methodological, procedural, and psychometric properties of the JTs. These data were synthesized in order to understand how this type of instrument has been implemented and reported. In addition to observing a steady increase in the use of JTs over the last four decades, we found that many features of JTs, when reported, varied substantially across studies. In terms of the impact of JT design, whereas modality was not found to have a strong or stable effect on learner performance (median d = .14; interquartile range = 1.04), scores on untimed JTs tended to be substantially higher than scores on timed JTs (d = 1.35; interquartile range = 1.74). In examining these features and their links to findings, this article builds on a growing body of methodological syntheses of L2 research instrumentation and makes a number of empirically grounded recommendations for future studies involving JTs.

    The Implementation of ISLA in MALL Technology: An Investigation into the Potential Effectiveness of Duolingo

    Following the increased implementation of mobile learning across the globe, specifically in the area of mobile-assisted language learning (MALL; Burston, 2015; Duman et al., 2015), the current paper provides an evaluation of the highly popular MALL application Duolingo. Specifically, this evaluation targets how effectively instructed second language acquisition (ISLA) research and theory have been implemented by Duolingo programmers. While current frameworks for the evaluation of MALL technology (e.g., Reinders & Pegrum, 2015) place significant focus on the learning affordances available, less emphasis has been placed on the implementation of ISLA theory. As such, Chapelle’s (2001) evaluation framework, originally developed for computer-assisted language learning programs, is revisited due to its basis in ISLA theory. Six criteria thus serve as the basis of this evaluation: Language Learning Potential, Meaning Focus, Authenticity, Learner Fit, Positive Impact, and Practicality. While certain benefits of Duolingo as a language learning tool are discussed, overall the evaluation indicates that Duolingo is more likely to be of benefit as a learning-support app than as the sole tool for autonomous learning.

    Nicolas Pélissier, Marc Marti, eds., Le storytelling. Succès des histoires, histoire d'un succès

    This book is a collection of texts submitted for review by a scientific committee following the study day held at the Université Nice Sophia Antipolis in November 2011: "Du storytelling à la mise en récit des mondes sociaux : la révolution narrative a-t-elle eu lieu ?" ("From storytelling to the narrativization of social worlds: has the narrative revolution taken place?"). Eleven contributions are presented. After an introduction by Marc Marti and Nicolas Pélissier (pp. 11-22), which very briefly recounts the history of the controversy over storytelling and…

    Review of Inclusive Education and Digital Technologies

    Oral Communication for Language Teachers: Assessment Rubric Development

    Scholarship on language teacher education emphasizes teaching language skills. Yet preservice language teachers must learn more than grammar or how to develop reading, writing, listening, and speaking skills. To become effective classroom presenters, teachers should develop skills such as introducing new concepts, facilitating class discussions, interacting with students, and providing feedback. Such skills encompass oral communication needs vital to language teaching. Our poster describes a project to develop a rubric for two undergraduate language teacher education courses that carry the university’s general education requirement in oral communication. Our process began with an online workshop wherein participants (graduate students and faculty who had taught the two courses, and undergraduate students who had taken the courses) brainstormed the qualities of effective communication in language teaching. In a second workshop, participants created a rubric for assessing course assignments. During the 2020/2021 academic year, we piloted the rubric in the two target courses.